multiple selection - translation to German


PROCEDURE IN MACHINE LEARNING AND STATISTICS
Input selection; Feature selection problem; Variable selection; Feature subset selection
  • Embedded method for Feature selection
  • Wrapper Method for Feature selection
  • Filter Method for feature selection

multiple selection         
LIST OF ITEMS ON WHICH USER OPERATIONS WILL TAKE PLACE
Text selection; Column selection; Column select; Rectangular block selection; Rectangular selection; Rectangular select; Multi-select; Multiple selection; Multiple selections; Text region
das Auswählen mehrerer Objekte
common multiple         
SMALLEST POSITIVE INTEGER DIVISIBLE BY TWO OR MORE INTEGERS
Lowest common multiple; Smallest common multiple; Least Common Multiple; Lowest common multiplier; Common multiples; Common multiple; Minimal common multiple
gemeinsames Vielfaches
least common multiple         
SMALLEST POSITIVE INTEGER DIVISIBLE BY TWO OR MORE INTEGERS
Lowest common multiple; Smallest common multiple; Least Common Multiple; Lowest common multiplier; Common multiples; Common multiple; Minimal common multiple
kleinstes gemeinsames Vielfaches
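The least common multiple from the entry above can be computed directly from the greatest common divisor via the identity lcm(a, b) = |a·b| / gcd(a, b). A minimal Python sketch:

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    """Least common multiple via lcm(a, b) = |a * b| / gcd(a, b)."""
    return abs(a * b) // gcd(a, b)

print(lcm(4, 6))   # 12
print(lcm(3, 5))   # 15
```

(Python 3.9+ also ships `math.lcm` built in, which handles more than two arguments.)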

Definition

multiple unit
¦ noun a passenger train of two or more carriages powered by integral motors which drive a number of axles.

Wikipedia

Feature selection

In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. Feature selection techniques are used for several reasons:

  • simplification of models to make them easier for researchers and users to interpret,
  • shorter training times,
  • avoidance of the curse of dimensionality,
  • improved compatibility of the data with a particular learning model class,
  • encoding of inherent symmetries present in the input space.

The central premise when using a feature selection technique is that the data contains some features that are either redundant or irrelevant, and can thus be removed without incurring much loss of information. Redundant and irrelevant are two distinct notions, since one relevant feature may be redundant in the presence of another relevant feature with which it is strongly correlated.

Feature selection techniques should be distinguished from feature extraction. Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features. Feature selection techniques are often used in domains where there are many features and comparatively few samples (or data points). Archetypal cases for the application of feature selection include the analysis of written texts and DNA microarray data, where there are many thousands of features, and a few tens to hundreds of samples.
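To make the idea concrete, here is a minimal sketch of a filter-style feature selector: it ranks features by absolute Pearson correlation with the target and keeps the top k. The helper name `filter_select` and the toy data are illustrative assumptions, not part of any particular library.

```python
import numpy as np

def filter_select(X: np.ndarray, y: np.ndarray, k: int = 2) -> np.ndarray:
    """Univariate filter method: score each column of X by the absolute
    Pearson correlation with y and return the indices of the top k."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                       for j in range(X.shape[1])])
    # Indices of the k highest-scoring features, best first.
    return np.argsort(scores)[::-1][:k]

# Toy data: column 0 tracks the target, column 1 is noise,
# column 2 duplicates column 0 (redundant but still relevant).
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X = np.column_stack([y, np.array([3.0, 1.0, 4.0, 1.0, 5.0]), 2 * y])

print(filter_select(X, y))  # columns 0 and 2 score highest
```

Note how this simple filter illustrates the redundancy point from the excerpt: columns 0 and 2 are perfectly correlated with each other, yet a univariate filter keeps both; detecting such redundancy is what wrapper and embedded methods (or multivariate filters) address.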